This paper addresses the multi-robot Active Information Acquisition (AIA) problem, in which a team of mobile robots, communicating over an underlying graph, estimates a hidden state that expresses a phenomenon of interest. Applications such as target tracking, coverage, and SLAM can be expressed in this framework. However, existing approaches are either not scalable, unable to handle dynamic phenomena, or not robust to changes in the communication graph. To address these shortcomings, we propose an Information-aware Graph Block Network (I-GBNet), an AIA adaptation of Graph Neural Networks, that aggregates information over the graph representation and provides sequential decision-making in a distributed manner. Trained via imitation learning with a centralized sampling-based expert solver, I-GBNet exhibits permutation equivariance and time invariance, while harnessing superior scalability, robustness, and generalizability to previously unseen environments and robot configurations. Experiments with hidden states and environments considerably more complex than those seen during training validate the properties of the proposed architecture and its efficacy in the applications of localization and tracking of dynamic targets.
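The permutation equivariance claimed for I-GBNet follows from the structure of graph aggregation itself: if each robot combines its own state with a symmetric function of its neighbors' states, relabeling the robots merely relabels the outputs. A minimal NumPy sketch of one such message-passing round — an illustrative mean-aggregation rule, not the authors' actual architecture, with all names (`gnn_aggregate`, `W_self`, `W_neigh`) assumed for this example:

```python
import numpy as np

def gnn_aggregate(node_feats, adj, W_self, W_neigh):
    """One round of graph message passing: each robot (node) combines
    its own features with the mean of its neighbors' features.

    node_feats: (N, d) array of per-robot information states
    adj:        (N, N) 0/1 adjacency matrix of the communication graph
    W_self, W_neigh: (d, d) learnable weight matrices
    """
    deg = adj.sum(axis=1, keepdims=True)   # neighbor counts per node
    deg[deg == 0] = 1                      # isolated nodes: avoid divide-by-zero
    neigh_mean = (adj @ node_feats) / deg  # symmetric aggregation over neighbors
    return np.tanh(node_feats @ W_self + neigh_mean @ W_neigh)
```

Because the aggregation is a mean over neighbors, permuting the node ordering (and the adjacency matrix rows/columns accordingly) permutes the output in the same way, which is exactly the equivariance property the abstract refers to.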
Exploring mutually beneficial cross-domain cues has shown great potential for accurate self-supervised depth estimation. In this work, we revisit feature fusion between depth and semantic information and propose an efficient local adaptive attention method for geometry-aware representation enhancement. Instead of building global connections or deforming attention across the feature space without restraint, we bound the spatial interaction within a learnable region of interest. In particular, we leverage geometric cues from semantic information to learn local adaptive bounding boxes to guide unsupervised feature aggregation. The local areas preclude most irrelevant reference points from the attention space, yielding more selective feature learning and faster convergence. We naturally extend the paradigm into a multi-head and hierarchical way to enable information distillation at different semantic levels and improve the feature discriminative ability for fine-grained depth estimation. Extensive experiments on the KITTI dataset show that our proposed method establishes a new state-of-the-art in the self-supervised monocular depth estimation task, demonstrating the effectiveness of our approach over former Transformer variants.
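The core idea above — restricting each query's attention to a bounded local region so that irrelevant reference points are masked out — can be illustrated with a simplified 1-D sketch. This is a minimal stand-in, assuming a fixed symmetric window rather than the learned adaptive bounding boxes the abstract describes; `local_attention_1d` and `radius` are names invented for this example:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_attention_1d(q, k, v, radius):
    """Scaled dot-product attention where each query position i attends
    only to key positions j with |i - j| <= radius; everything outside
    the local window is masked to -inf before the softmax."""
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) <= radius
    scores = np.where(mask, scores, -np.inf)   # exclude out-of-window points
    return softmax(scores, axis=-1) @ v
```

With `radius = 0` each position attends only to itself (the output equals `v`), and as the radius grows the result converges to full global attention; the learned boxes in the paper sit between these extremes, adapting the window per position.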
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicles (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.